energy level
Chemistry may not be the 'killer app' for quantum computers after all
Quantum chemistry calculations that could advance drug development or agriculture have recently emerged as a promising "killer application" of quantum computers, but a new analysis suggests this is unlikely to be the case. Progress in building quantum computers has accelerated greatly in recent years, but it remains an open question which uses are most likely to justify the ongoing investment in this technology. One popular contender is solving problems in quantum chemistry, such as calculating the energy levels of molecules relevant to biomedicine or industry. This requires accounting for the behavior of many quantum particles, the electrons in the molecule, simultaneously, so it seems like a good match for computers built from many quantum parts. However, Xavier Waintal at CEA Grenoble in France and his colleagues have now shown that two leading quantum computing algorithms for this task may have, at best, limited use.
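For context on what "calculating the energy levels of molecules" means computationally, here is a minimal classical sketch: energy levels are the eigenvalues of a Hamiltonian matrix whose dimension grows exponentially with the number of particles, which is exactly the bottleneck quantum algorithms aim to sidestep. The two-qubit Hamiltonian and its coefficients below are illustrative placeholders, not any specific molecule.

```python
import numpy as np

# Pauli matrices: building blocks for a toy molecular Hamiltonian.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Toy two-qubit Hamiltonian loosely shaped like a minimal H2 model;
# the coefficients are illustrative placeholders, not a fitted molecule.
a, b, c = -1.05, -1.05, 0.18
H = a * kron_all([Z, I]) + b * kron_all([I, Z]) + c * kron_all([X, X])

# Classically, the energy levels are just the eigenvalues of H; the catch
# is that for n interacting particles the matrix dimension grows as 2^n.
levels = np.linalg.eigvalsh(H)
print("energy levels:", levels)  # levels[0] is the ground-state energy
```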
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.25)
- North America > United States (0.05)
- Europe > Switzerland > Zürich > Zürich (0.05)
- Information Technology > Hardware (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
Density of States Prediction of Crystalline Materials via Prompt-guided Multi-Modal Transformer
Lee, Namkyeong
That is, DOS is not solely determined by the crystalline material but also by the energy levels, which has been neglected in previous works. In this paper, we propose to integrate heterogeneous information obtained from the crystalline materials and the energies via a multi-modal transformer, thereby modeling the complex relationships between the atoms in the crystalline materials and various energy levels for DOS prediction. Moreover, we propose to utilize prompts to guide the model to learn the crystal structural system-specific interactions between crystalline materials and energies. Extensive experiments on two types of DOS, i.e., Phonon DOS and Electron DOS, with various real-world scenarios demonstrate the superiority of DOSTransformer.
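To make the architecture concrete, here is a minimal PyTorch sketch of the idea: per-energy query embeddings cross-attend to atom embeddings, with a learned prompt token prepended to steer the interaction. All dimensions and the class name are hypothetical assumptions; this is not the released DOSTransformer code.

```python
import torch
import torch.nn as nn

class DOSTransformerSketch(nn.Module):
    """Sketch only: energy-level queries attend to atom features,
    with a learned prompt token steering the interaction."""
    def __init__(self, d=64, n_heads=4, atom_dim=16):
        super().__init__()
        self.atom_proj = nn.Linear(atom_dim, d)            # per-atom features -> d
        self.energy_emb = nn.Linear(1, d)                  # scalar energy -> d
        self.prompt = nn.Parameter(torch.randn(1, 1, d))   # crystal-system prompt
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.head = nn.Linear(d, 1)                        # DOS value per energy

    def forward(self, atom_feats, energies):
        # atom_feats: (B, n_atoms, atom_dim); energies: (B, n_E, 1)
        atoms = self.atom_proj(atom_feats)
        atoms = torch.cat([self.prompt.expand(atoms.size(0), -1, -1), atoms], dim=1)
        q = self.energy_emb(energies)                      # one query per energy level
        fused, _ = self.cross_attn(q, atoms, atoms)        # energies attend to atoms
        return self.head(fused).squeeze(-1)                # (B, n_E) predicted DOS

model = DOSTransformerSketch()
e = torch.linspace(-5, 5, 50).view(1, 50, 1).expand(2, 50, 1)
dos = model(torch.randn(2, 8, 16), e)                      # (2, 50) DOS curve per crystal
```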
- North America > United States (0.14)
- Asia > Middle East > Israel (0.04)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- Energy (0.93)
- Health & Medicine (0.67)
- Materials (0.67)
Unsupervised Polychromatic Neural Representation for CT Metal Artifact Reduction
Emerging neural reconstruction techniques based on tomography (e.g., NeRF, NeAT, and NeRP) have started showing unique capabilities in medical imaging. In this work, we present a novel Polychromatic neural representation (Polyner) to tackle the challenging problem of CT imaging when metallic implants exist within the human body. CT metal artifacts arise from the drastic variation of metal's attenuation coefficients at various energy levels of the X-ray spectrum, leading to a nonlinear metal effect in CT measurements. Recovering CT images from metal-affected measurements hence poses a complicated nonlinear inverse problem where empirical models adopted in previous metal artifact reduction (MAR) approaches lead to signal loss and strongly aliased reconstructions.
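The nonlinearity the abstract refers to can be seen directly in the polychromatic Beer-Lambert forward model. Below is a minimal numerical sketch with made-up spectrum and attenuation curves (not the paper's model), showing that the log-measurement is not linear in the metal path length.

```python
import numpy as np

# Sketch of the polychromatic (Beer-Lambert) forward model behind the
# "nonlinear metal effect": the detector integrates over the X-ray
# spectrum, so energy-dependent attenuation no longer log-linearizes.
energies = np.linspace(30, 120, 10)                  # keV bins (illustrative)
spectrum = np.exp(-0.5 * ((energies - 70.0) / 20.0) ** 2)
spectrum /= spectrum.sum()                           # normalized source spectrum

mu = np.vstack([np.full_like(energies, 0.02),        # tissue: nearly flat mu(E)
                0.8 * (60.0 / energies) ** 3])       # metal: strong E-dependence

def measurement(path_lengths):
    """path_lengths[i]: length of the ray through material i."""
    line_integral = path_lengths @ mu                # attenuation per energy bin
    return -np.log(np.sum(spectrum * np.exp(-line_integral)))

# The measurement is NOT linear in metal thickness, which is what breaks
# reconstruction pipelines that assume a monochromatic (linear) model.
for t in [0.0, 0.5, 1.0]:
    print(t, measurement(np.array([10.0, t])))
```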
Understanding temperature tuning in energy-based models
Fields, Peter W, Ngampruetikorn, Vudtiwat, Schwab, David J, Palmer, Stephanie E
Energy-based models trained on evolutionary data can now generate novel protein sequences with custom functions [38]. A crucial, yet poorly understood, step in these successes is the use of an artificially low sampling "temperature" to produce functional sequences from the trained model. This adjustment is often the deciding factor between generating functional enzymes and inert polypeptides. A fundamental question arises as to what necessitates temperature tuning and what it reveals about the space of functional proteins and the limits of models trained on finite data. Temperature tuning is a heuristic used broadly across machine learning to improve training [16, 33, 34], generalization and generative performance [14, 45, 47, 48], and energy-landscape dynamics for memory retrieval [35]. It follows the basic intuition that one can navigate the trade-off between fidelity (producing believable, high-probability outputs at low temperature) and diversity (exploring a wide range of novel outputs at high temperature). Despite its widespread use, this practice lacks a principled, quantitative explanation and has not been systematically connected to known issues of the fitting procedure, particularly fundamental limits of the learning process such as biases introduced by training on finite data [5, 9, 10, 21, 22, 41].
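For readers unfamiliar with the mechanics, temperature tuning amounts to sampling from p_T(x) proportional to exp(-E(x)/T) rather than from the trained distribution itself. A minimal sketch with synthetic energies (not protein data) is below; lowering T trades diversity for fidelity, exactly the knob the paper interrogates.

```python
import numpy as np

# Minimal sketch of temperature-tuned sampling from an energy-based model:
# p_T(x) is proportional to exp(-E(x)/T), so lowering T concentrates samples
# on low-energy (high-probability) states at the cost of diversity.
rng = np.random.default_rng(0)
E = rng.normal(size=1000)            # energies of 1000 hypothetical sequences

def sample_at_temperature(E, T, n=5, seed=1):
    logits = -E / T
    p = np.exp(logits - logits.max())           # numerically stable softmax
    p /= p.sum()
    return np.random.default_rng(seed).choice(len(E), size=n, p=p)

print("T=1.0:", np.sort(E[sample_at_temperature(E, 1.0)]))  # diverse draws
print("T=0.2:", np.sort(E[sample_at_temperature(E, 0.2)]))  # near the minimum
```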
- North America > United States > New York > New York County > New York City (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- (3 more...)
Identifying Quantum Structure in AI Language: Evidence for Evolutionary Convergence of Human and Artificial Cognition
Aerts, Diederik, Arguëlles, Jonito Aerts, Beltran, Lester, Geriente, Suzette, Leporini, Roberto, de Bianchi, Massimiliano Sassoli, Sozzo, Sandro
We present the results of cognitive tests on conceptual combinations, performed using specific Large Language Models (LLMs) as test subjects. In the first test, performed with ChatGPT and Gemini, we show that Bell's inequalities are significantly violated, which indicates the presence of 'quantum entanglement' in the tested concepts. In the second test, also performed using ChatGPT and Gemini, we instead identify the presence of 'Bose-Einstein statistics', rather than the intuitively expected 'Maxwell-Boltzmann statistics', in the distribution of the words contained in large-size texts. Interestingly, these findings mirror the results previously obtained in both cognitive tests with human participants and information retrieval tests on large corpora. Taken together, they point to the 'systematic emergence of quantum structures in conceptual-linguistic domains', regardless of whether the cognitive agent is human or artificial. Although LLMs are classified as neural networks for historical reasons, we believe that a more essential form of knowledge organization takes place in the distributive semantic structure of vector spaces built on top of the neural network. It is this meaning-bearing structure that lends itself to a phenomenon of evolutionary convergence between human cognition and language, slowly established through biological evolution, and LLM cognition and language, emerging much more rapidly as a result of self-learning and training. We analyze various aspects and examples that contain evidence supporting the above hypothesis. We also advance a unifying framework that explains the pervasive quantum organization of meaning that we identify.
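The Bell-test logic can be stated compactly with the CHSH statistic. The sketch below shows the computation on placeholder ±1 outcome data standing in for coded model judgments (the paper's actual conceptual-combination measurements are not reproduced); any classical local model bounds |S| by 2.

```python
import numpy as np

# Sketch of the CHSH form of Bell's inequality used in such tests:
# S = E(A,B) - E(A,B') + E(A',B) + E(A',B'), with |S| <= 2 for any
# classical (local hidden-variable) model; entanglement allows up to 2*sqrt(2).
def corr(a, b):
    return float(np.mean(a * b))    # empirical correlation of +/-1 outcomes

rng = np.random.default_rng(0)
# Placeholder outcome vectors for the four measurement settings:
A, Ap, B, Bp = (rng.choice([-1, 1], size=200) for _ in range(4))
S = corr(A, B) - corr(A, Bp) + corr(Ap, B) + corr(Ap, Bp)
print("CHSH S =", round(S, 3), "| classical bound violated:", abs(S) > 2)
```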
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > New York (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (5 more...)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.46)
- Health & Medicine (0.47)
- Energy (0.45)
Skill-Aligned Fairness in Multi-Agent Learning for Collaboration in Healthcare
Ekpo, Promise Osaine, La, Brian, Wiener, Thomas, Agarwal, Saesha, Agrawal, Arshia, Gonzalez-Pumariega, Gonzalo, Molu, Lekan P., Taylor, Angelique
Fairness in multi-agent reinforcement learning (MARL) is often framed as a workload-balance problem, overlooking agent expertise and the structured coordination required in real-world domains. In healthcare, equitable task allocation requires both workload balance and expertise alignment to prevent burnout and overuse of highly skilled agents. Here, workload balance refers to distributing an approximately equal number of subtasks, or equalised effort, across healthcare workers regardless of their expertise. We make two contributions to address this problem. First, we propose FairSkillMARL, a framework that defines fairness as the dual objective of workload balance and skill-task alignment. Second, we introduce MARLHospital, a customizable healthcare-inspired environment for modeling team compositions and the impact of energy-constrained scheduling on fairness, since no existing simulators are well suited to this problem. We conducted experiments comparing FairSkillMARL, combined with four standard MARL methods, against two state-of-the-art fairness metrics. Our results suggest that fairness based solely on equal workload can lead to task-skill mismatches, highlighting the need for more robust metrics that capture skill-task misalignment. Our work provides tools and a foundation for studying fairness in heterogeneous multi-agent systems where aligning effort with expertise is critical.
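As a way to see what the dual objective could mean operationally, here is a hypothetical scoring sketch that combines workload variance with a skill-mismatch count. The weighting, names, and structure are illustrative assumptions, not the FairSkillMARL formulation.

```python
import numpy as np

# Hypothetical sketch of a dual-objective fairness cost: penalize unequal
# workload AND tasks assigned to agents lacking the required skill.
def fairness_cost(workloads, skill, assignment, lam=0.5):
    """workloads: tasks completed per agent; skill[i, t] = 1 if agent i
    is qualified for task type t; assignment: (agent, task_type) pairs."""
    balance = np.var(workloads)                              # workload imbalance
    mismatch = sum(1 - skill[i, t] for i, t in assignment)   # skill misalignment
    return lam * balance + (1 - lam) * mismatch              # lower is fairer

skill = np.array([[1, 0],      # agent 0: qualified only for task type 0
                  [1, 1]])     # agent 1: cross-trained
print(fairness_cost(np.array([2, 1]), skill, [(0, 0), (1, 1), (0, 1)]))
```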
- North America > United States (0.05)
- Oceania > New Zealand (0.04)
- Asia > Middle East > Jordan (0.04)
- Health & Medicine > Health Care Providers & Services (1.00)
- Leisure & Entertainment > Games > Computer Games (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
The DNA of nuclear models: How AI predicts nuclear masses
Richardson, Kate A., Trifinopoulos, Sokratis, Williams, Mike
Recently, many AI-based tools have shown promising results on the task of predicting nuclear masses, some achieving precision that surpasses the best physics models. However, the utility of these AI models remains in question, given that predictions are only useful where measurements do not exist, which inherently requires extrapolation away from the training (and testing) samples. Since AI models are largely black boxes, the reliability of such an extrapolation is difficult to assess. For example, we find that (and explain why) the most important dimensions of the AI model's internal representation form a double helix, where the analog of the hydrogen bonds in DNA here links the number of protons and neutrons found in the most stable nucleus of each isotopic chain. Remarkably, the improvement of the AI model over symbolic ones can almost entirely be attributed to an observation made by Jaffe in 1969 based on the structure of most known nuclear ground states. The end result is a fully interpretable data-driven model of nuclear masses based on physics deduced by AI. Atomic nuclei consist of Z protons and N neutrons bound together by the strong nuclear force. Notably, many open problems in nuclear and (astro)particle physics are limited by a lack of precise knowledge of nuclear masses, either directly or indirectly via other quantities that require them as inputs. Experimentally, precise measurements have been made for the masses of (quasi)stable nuclei [9]; however, measurements of highly unstable nuclei are currently challenging and thus must be predicted using some combination of tractable theoretical calculations, e.g. using phenomenological potentials, and empirical observations of other nuclei. Despite achieving an impressive level of precision, even the best such model is not sufficient to solve many open problems, e.g., r-process nucleosynthesis [10-12].
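For reference, the kind of symbolic physics model the AI is compared against is exemplified by the Bethe-Weizsäcker semi-empirical mass formula. Below is a sketch using standard textbook coefficients, not the paper's fitted values.

```python
import numpy as np

# Bethe-Weizsacker semi-empirical mass formula: binding energy B(Z, N) of a
# nucleus with Z protons and N neutrons. Coefficients in MeV are standard
# textbook values, not those used in the paper.
def binding_energy(Z, N):
    A = Z + N
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    pairing = (aP / np.sqrt(A)) * ((Z % 2 == 0) and (N % 2 == 0)) \
            - (aP / np.sqrt(A)) * ((Z % 2 == 1) and (N % 2 == 1))
    return (aV * A                                 # volume term
            - aS * A ** (2 / 3)                    # surface term
            - aC * Z * (Z - 1) / A ** (1 / 3)      # Coulomb repulsion
            - aA * (N - Z) ** 2 / A                # symmetry term
            + pairing)                             # pairing term

# Iron-56 (Z=26, N=30): about 8.8 MeV per nucleon, near the empirical peak.
print(binding_energy(26, 30) / 56)
```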
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Switzerland > Geneva > Geneva (0.04)
- Energy (0.67)
- Government > Regional Government (0.46)